Added missing sentiment classifier, fixes #1 #2
haotian1028 wants to merge 6 commits into sailuh:main
Conversation
- Added API folder
- Train and Test notebook for each part of the process, to break it up
- Cleaned up imports
carlosparadis
left a comment
Thank you for moving the code around to the api folder. Only a few minor changes. When you looked at the original file, were there pip commands somewhere?
I am not seeing anything that installs the dependencies. Could you create an env.yml file here without using the export? (See my remark on the env.yml file in the process mining project.)
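A hand-written env.yml (rather than the output of `conda env export`, which pins machine-specific builds) might look like the sketch below. The package list is an assumption, since the original notebook's pip commands were not found:

```yaml
# Hypothetical env.yml for this PR; package names are assumptions,
# not confirmed dependencies of the original notebook.
name: sentiment-classifier
channels:
  - defaults
dependencies:
  - python=3.9
  - pip
  - pip:
      - pandas        # assumed: CSV dataset handling
      - scikit-learn  # assumed: classifier training
```

The environment would then be created with `conda env create -f env.yml`, which keeps the file portable across machines.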
api/train.py
Can you combine this to test.py and rename to model.py?
| "id": "RAXtSnSK4LPr" | ||
| }, | ||
| "source": [ | ||
| "# Tokenlized\n", |
rename file to tokenize_statistics.ipynb
Note to self: This notebook does not tokenize anything currently; it just adds a column of 1s associated with the tokenizer. Not sure why, will figure out in the future.
carlosparadis
left a comment
We confirmed that the only files that will truly be needed by this notebook are:
- so-dataset.csv
- gh-dataset.csv
- crossplatform_sf_dataset.csv
and also the one-column ones:
- so-dataset_tokenized.csv
- gh-dataset_tokenized.csv
- crossplatform_sf_dataset_tokenized.csv
Note to self: The functions in filter.py are not used anywhere in the original notebook, and therefore are not used in these 3 refactored notebooks.
Note to self: The functions in tokenizer.py are not used anywhere in the original notebook, and therefore are not used in these 3 refactored notebooks.
Note to self: This notebook does not tokenize anything currently; it just adds a column of 1s associated with the tokenizer. Not sure why, will figure out in the future.
So just to confirm: I just need your pips, if possible as an env.yml, and the api files train.py and test.py merged into model.py.
- train.py and test.py now exist in model.py
- env added for required packages
- Minor typo changes

Signed-off-by: Connor Narowetz <cnarowetz@gmail.com>
- __init__.py added
- docs for model.py added

Signed-off-by: Connor Narowetz <cnarowetz@gmail.com>
Pdocs attached
@connorn-dev thank you for remembering this!
In api/model.py, added functions:
- train_model()
- predict_model()

Haotian's original train_model() and test_model() commented out.

Added exec/train_or_predict.py:
- Python script to be syscalled from Kaiaulu
- Calls train_model() and predict_model()
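A script like exec/train_or_predict.py could be a thin dispatcher, roughly as sketched below. Only the function names train_model()/predict_model() and the script's role as a Kaiaulu syscall target come from the PR; the bodies are stand-ins for imports from api.model, and the argument layout is an assumption:

```python
"""Hedged sketch of exec/train_or_predict.py: a thin dispatcher meant
to be syscalled from Kaiaulu. The real script would import the two
functions from api.model instead of defining these stand-ins."""
import sys

def train_model(dataset_path):
    # Stand-in for api.model.train_model
    return f"trained on {dataset_path}"

def predict_model(dataset_path):
    # Stand-in for api.model.predict_model
    return f"predicted {dataset_path}"

# Map the subcommand given on the command line to the matching function.
COMMANDS = {"train": train_model, "predict": predict_model}

def main(argv):
    # argv: [mode, dataset_path], e.g. ["train", "so-dataset.csv"];
    # the real script would call main(sys.argv[1:]) under a
    # `if __name__ == "__main__":` guard.
    if len(argv) != 2 or argv[0] not in COMMANDS:
        raise SystemExit("usage: train_or_predict.py {train,predict} DATASET")
    return COMMANDS[argv[0]](argv[1])
```

Keeping the dispatch in a standalone script lets Kaiaulu invoke it with a plain system call (e.g. R's `system2()`) without embedding any Python inside the R package.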